
    To Learn or Not to Learn Features for Deformable Registration?

    Feature-based registration has been popular with a variety of features ranging from voxel intensity to Self-Similarity Context (SSC). In this paper, we examine the question of how features learnt using various Deep Learning (DL) frameworks can be used for deformable registration, and whether this feature learning is necessary at all. We investigate the use of features learned by different DL methods in the current state-of-the-art discrete registration framework and analyze its performance on two publicly available datasets. We draw insights into the type of DL framework useful for feature learning and the impact, if any, of the complexity of different DL models and brain parcellation methods on the performance of discrete registration. Our results indicate that registration performance with DL features and SSC is comparable and stable across datasets, whereas this does not hold for low-level features.
    Comment: 9 pages, 4 figures
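
    A minimal illustrative sketch (not from the paper) of how dense per-voxel features, whether hand-crafted such as SSC or learned with a DL model, can feed a discrete registration framework: every voxel receives a cost over a small set of candidate displacements, and a discrete optimizer then selects one displacement label per voxel. The 2D setting, function name, and search range are assumptions made for brevity.

```python
# Build a cost volume from dense feature maps for discrete registration.
import numpy as np

def feature_cost_volume(feat_fixed, feat_moving, max_disp=2):
    """feat_*: (H, W, C) dense feature maps.
    Returns (H, W, L) costs over the L = (2*max_disp + 1)**2 candidate
    2D displacements, plus the list of offsets."""
    H, W, _ = feat_fixed.shape
    offsets = [(dy, dx) for dy in range(-max_disp, max_disp + 1)
                        for dx in range(-max_disp, max_disp + 1)]
    costs = np.empty((H, W, len(offsets)), dtype=np.float32)
    padded = np.pad(feat_moving,
                    ((max_disp, max_disp), (max_disp, max_disp), (0, 0)),
                    mode="edge")
    for k, (dy, dx) in enumerate(offsets):
        shifted = padded[max_disp + dy: max_disp + dy + H,
                         max_disp + dx: max_disp + dx + W]
        # squared feature distance for displacement (dy, dx) at every voxel
        costs[..., k] = np.sum((feat_fixed - shifted) ** 2, axis=-1)
    return costs, offsets
```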

    Fast Learning-based Registration of Sparse 3D Clinical Images

    We introduce SparseVM, a method that registers clinical-quality 3D MR scans both faster and more accurately than previously possible. Deformable alignment, or registration, of clinical scans is a fundamental task for many clinical neuroscience studies. However, most registration algorithms are designed for high-resolution research-quality scans. In contrast to research-quality scans, clinical scans are often sparse, missing up to 86% of the slices available in research-quality scans. Existing methods for registering these sparse images are either inaccurate or extremely slow. We present a learning-based registration method, SparseVM, that is more accurate and orders of magnitude faster than the most accurate clinical registration methods. To our knowledge, it is the first method to use deep learning specifically tailored to registering clinical images. We demonstrate our method on a clinically acquired MRI dataset of stroke patients and on a simulated sparse MRI dataset. Our code is available as part of the VoxelMorph package at http://voxelmorph.mit.edu/.
    Comment: This version was accepted to CHIL. It builds on the previous version of the paper and includes more experimental results.
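
    One plausible way to handle the missing slices, in the spirit of the paper (this is an assumption for illustration, not the released VoxelMorph code), is to weight the similarity term of a learning-based registration loss by a mask of acquired voxels, so that missing slices contribute nothing to the objective:

```python
# Mask-weighted MSE similarity term for sparse clinical scans (illustrative).
import torch

def masked_mse(warped_moving, fixed, mask):
    """warped_moving, fixed: (B, 1, D, H, W) image volumes.
    mask: same shape, 1.0 where the clinical scan was actually acquired,
    0.0 on missing slices."""
    diff = mask * (warped_moving - fixed) ** 2
    return diff.sum() / mask.sum().clamp(min=1.0)
```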

    The cooling concept of the X-ray telescope eROSITA


    Image-and-spatial transformer networks for structure-guided image registration

    Image registration with deep neural networks has become an active field of research and an exciting avenue for a long-standing problem in medical imaging. The goal is to learn a complex function that maps the appearance of input image pairs to the parameters of a spatial transformation in order to align corresponding anatomical structures. We argue and show that the current direct, non-iterative approaches are sub-optimal, in particular if we seek accurate alignment of Structures-of-Interest (SoI). Information about SoI is often available at training time, for example in the form of segmentations or landmarks. We introduce a novel, generic framework, Image-and-Spatial Transformer Networks (ISTNs), to leverage SoI information, allowing us to learn new image representations that are optimised for the downstream registration task. Thanks to these representations we can employ a test-specific, iterative refinement over the transformation parameters, which yields highly accurate registration even with very limited training data. Performance is demonstrated on pairwise 3D brain registration and illustrative synthetic data.
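
    A rough sketch of the two-part idea (not the authors' code): an image transformer network (ITN) maps each image to a representation that highlights the structures of interest, a spatial transformer network (STN) regresses transformation parameters from the pair of representations, and at test time the predicted parameters are refined iteratively by optimizing the similarity of the transformed representations. The 2D affine setting, module names, and hyper-parameters are illustrative assumptions.

```python
# ITN: image -> SoI-aware representation; STN: representation pair -> affine;
# refine(): test-specific iterative refinement of the transformation parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ITN(nn.Module):
    """Image transformer: maps an image to a representation of its structures of interest."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)

class STN(nn.Module):
    """Spatial transformer: regresses 2D affine parameters from a pair of representations."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(2, 16, 3, stride=2), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.fc = nn.Linear(16 * 16, 6)
        self.fc.weight.data.zero_()                      # start from the identity
        self.fc.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, rep_moving, rep_fixed):
        theta = self.fc(self.features(torch.cat([rep_moving, rep_fixed], dim=1)))
        return theta.view(-1, 2, 3)

def warp(img, theta):
    grid = F.affine_grid(theta, list(img.shape), align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

def refine(theta, rep_moving, rep_fixed, steps=50, lr=1e-2):
    """Per-test-case refinement over the transformation parameters only."""
    rep_moving, rep_fixed = rep_moving.detach(), rep_fixed.detach()
    theta = theta.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(warp(rep_moving, theta), rep_fixed)
        loss.backward()
        opt.step()
    return theta.detach()
```

    The point of refining in representation space rather than intensity space is that the ITN, trained with SoI labels, can produce representations for which the downstream similarity is well-behaved even when intensities differ.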

    Parallel Transport of Surface Deformations from Pole Ladder to Symmetrical Extension

    Cardiac motion contains information underlying disease development and complements the anatomical information extracted for each subject. However, normalization of temporal trajectories is necessary due to anatomical differences between subjects. In this study, we encode inter-subject shape variations and temporal deformations in a common space of diffeomorphic registration. They are parameterized by stationary velocity fields. Previous normalization algorithms applied in medical imaging were first-order approximations of parallel transport. In contrast, pole ladder was recently shown to be a third-order scheme in general affine connection spaces and exact in one step in affine symmetric spaces. We further improve this procedure with a more symmetric mapping scheme, which relies on geodesic symmetries around mid-points. We apply the method to analyze cardiac motion in pulmonary hypertension populations. Evaluation is performed on a 4D cardiac database, with meshes of the right ventricle obtained by commercial speckle-tracking from echocardiograms. We assess the stability of the algorithms by computing their numerical inverse error. Our method proves accurate and efficient in terms of compactness of the subspace representation.
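
    To make the construction concrete, here is a hedged sketch of one rung of the pole ladder on the unit sphere, a symmetric space where the scheme is exact in one step. The paper applies the same construction to stationary velocity fields of diffeomorphisms; the sphere and the helper names are only illustrative stand-ins.

```python
# One rung of the pole ladder on S^2, built from generic exp/log maps.
import numpy as np

def exp_map(p, v):
    """Riemannian exponential on the unit sphere: follow the geodesic from p along v."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return p
    return np.cos(n) * p + np.sin(n) * v / n

def log_map(p, q):
    """Riemannian logarithm on the unit sphere: tangent vector at p pointing to q."""
    u = q - np.dot(p, q) * p
    n = np.linalg.norm(u)
    if n < 1e-12:
        return np.zeros(3)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return theta * u / n

def pole_ladder(p0, p1, v):
    """Parallel transport of the tangent vector v (at p0) to p1 along their geodesic."""
    m = exp_map(p0, 0.5 * log_map(p0, p1))   # mid-point of the geodesic
    q = exp_map(p0, v)                       # shoot the deformation from p0
    r = exp_map(m, -log_map(m, q))           # geodesic symmetry around the mid-point
    return -log_map(p1, r)                   # read off the transported vector at p1
```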

    Probabilistic Motion Modeling from Medical Image Sequences: Application to Cardiac Cine-MRI

    Keywords: probabilistic motion model, motion tracking, temporal super-resolution, diffeomorphic registration, temporal variational autoencoder.
    We propose to learn a probabilistic motion model from a sequence of images. Besides spatio-temporal registration, our method offers to predict motion from a limited number of frames, which is useful for temporal super-resolution. The model is based on a probabilistic latent space and a novel temporal dropout training scheme. This enables simulation and interpolation of realistic motion patterns given only one or any subset of frames of a sequence. The encoded motion can also be transported from one subject to another without the need for inter-subject registration. An unsupervised generative deformation model is applied within a temporal convolutional network, which leads to a diffeomorphic motion model encoded as a low-dimensional motion matrix. Applied to cardiac cine-MRI sequences, we show improved registration accuracy and spatio-temporally smoother deformations compared to three state-of-the-art registration algorithms. We also demonstrate the model's applicability to motion transport by simulating a pathology in a healthy case. Furthermore, we show improved motion reconstruction from incomplete sequences compared to linear and cubic interpolation.
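
    As an illustration of what a temporal dropout training scheme might look like (an assumption about the mechanism, not the authors' implementation), frames of a cine sequence can be randomly masked during training so that the model learns to reconstruct motion from any subset of frames:

```python
# Randomly drop frames of a sequence during training (illustrative).
import torch

def temporal_dropout(frames, keep_prob=0.5):
    """frames: (B, T, C, H, W). Returns the masked frames and the binary mask;
    the first frame is always kept as the reference."""
    B, T = frames.shape[:2]
    mask = (torch.rand(B, T, device=frames.device) < keep_prob).float()
    mask[:, 0] = 1.0                                  # keep the reference frame
    return frames * mask.view(B, T, 1, 1, 1), mask
```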

    Multimodal image alignment through a multiscale chain of neural networks with application to remote sensing

    We tackle the problem of non-rigid multimodal image registration, which is of prime importance in remote sensing and medical imaging. The difficulties encountered by classical registration approaches include feature design and slow optimization by gradient descent. Analyzing these methods, we note the significance of the notion of scale. We design easy-to-train, fully convolutional neural networks able to learn scale-specific features. Once chained appropriately, they perform global registration in linear time, getting rid of gradient-descent schemes by predicting the deformation directly. We show their performance in terms of quality and speed on various remote-sensing multimodal image alignment tasks. In particular, we are able to correctly register cadastral maps of buildings as well as road polylines onto RGB images, and we outperform current keypoint matching methods.
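
    A rough sketch of a coarse-to-fine chain of scale-specific networks, reconstructed from the description above rather than from the authors' code: each small fully-convolutional network sees the fixed image and the moving image warped by the flow accumulated so far, and predicts a residual deformation at its own scale. The 2D single-channel setting, module names, and three-scale default are assumptions.

```python
# Chain of scale-specific CNNs predicting a dense deformation coarse-to-fine.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleNet(nn.Module):
    """Small fully-convolutional net predicting a 2-channel flow residual."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 2, 3, padding=1))

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(img, flow):
    """Bilinear warp of img by a dense pixel-displacement field flow (B, 2, H, W)."""
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys)).float().to(img.device)           # (2, H, W)
    coords = base + flow
    grid = torch.stack((2 * coords[:, 0] / (W - 1) - 1,
                        2 * coords[:, 1] / (H - 1) - 1), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True, padding_mode="border")

def register_multiscale(moving, fixed, nets, n_scales=3):
    """Run the chain from coarse to fine; nets[k] handles scale k (0 = coarsest).
    Assumes single-channel images with sides divisible by 2**(n_scales - 1)."""
    flow = None
    for k in range(n_scales):
        s = 2 ** (n_scales - 1 - k)
        mov_s = F.avg_pool2d(moving, s) if s > 1 else moving
        fix_s = F.avg_pool2d(fixed, s) if s > 1 else fixed
        if flow is None:
            B, _, Hs, Ws = mov_s.shape
            flow = torch.zeros(B, 2, Hs, Ws, device=moving.device)
        else:
            flow = 2 * F.interpolate(flow, scale_factor=2, mode="bilinear",
                                     align_corners=True)
        flow = flow + nets[k](warp(mov_s, flow), fix_s)
    return flow
```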

    Unsupervised Learning for Fast Probabilistic Diffeomorphic Registration

    Traditional deformable registration techniques achieve impressive results and offer a rigorous theoretical treatment, but are computationally intensive since they solve an optimization problem for each image pair. Recently, learning-based methods have facilitated fast registration by learning spatial deformation functions. However, these approaches use restricted deformation models, require supervised labels, or do not guarantee a diffeomorphic (topology-preserving) registration. Furthermore, learning-based registration tools have not been derived from a probabilistic framework that can offer uncertainty estimates. In this paper, we present a probabilistic generative model and derive an unsupervised learning-based inference algorithm that makes use of recent developments in convolutional neural networks (CNNs). We demonstrate our method on a 3D brain registration task and provide an empirical analysis of the algorithm. Our approach yields state-of-the-art accuracy and very fast runtimes, while providing diffeomorphic guarantees and uncertainty estimates. Our implementation is available online at http://voxelmorph.csail.mit.edu
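
    A minimal sketch of the scaling-and-squaring step that integrates a stationary velocity field into an (approximately) diffeomorphic displacement field, one standard ingredient of such unsupervised frameworks. The 2D setting and helper names are assumptions, not the VoxelMorph implementation.

```python
# Group exponential of a stationary velocity field by scaling and squaring.
import torch
import torch.nn.functional as F

def warp(field, disp):
    """Sample the vector field `field` (B, 2, H, W) at locations x + disp(x)."""
    B, _, H, W = field.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys)).float().to(field.device)
    coords = base + disp
    grid = torch.stack((2 * coords[:, 0] / (W - 1) - 1,
                        2 * coords[:, 1] / (H - 1) - 1), dim=-1)
    return F.grid_sample(field, grid, align_corners=True, padding_mode="border")

def exp_svf(velocity, n_steps=7):
    """Start from v / 2^n and compose the resulting small flow with itself n times."""
    disp = velocity / (2 ** n_steps)
    for _ in range(n_steps):
        disp = disp + warp(disp, disp)   # phi <- phi o phi
    return disp
```

    In such a pipeline, a network would predict `velocity` for an image pair, and the exponentiated displacement would warp the moving image inside an unsupervised similarity-plus-regularization loss.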